Estimation of human emotions plays an important role in the development of modern brain-computer interface devices such as the Emotiv EPOC+ headset. In this paper, we present an experiment to assess the classification accuracy of the emotional states provided by the headset's application programming interface (API). In this experiment, several sets of images selected from the International Affective Picture System (IAPS) dataset are shown to sixteen participants wearing the headset. First, the participants' responses to the elicited emotions, collected with a self-assessment manikin questionnaire, are compared with the validated IAPS predefined valence, arousal and dominance values. After statistically demonstrating that the responses are highly correlated with the IAPS values, several artificial neural networks (ANNs) based on the multilayer perceptron architecture are tested to calculate the classification accuracy of the Emotiv EPOC+ API emotional outcomes. The best result is obtained for an ANN configuration with three hidden layers containing 30, 8 and 3 neurons, respectively. This configuration achieves 85% classification accuracy, which means that the emotional estimation provided by the headset can be used with high confidence in real-time applications based on users' emotional states. Thus, the emotional states given by the headset's API may be used without further processing of the electroencephalogram signals acquired from the scalp, which would add a level of difficulty.
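The ANN configuration described above (a multilayer perceptron with three hidden layers of 30, 8 and 3 neurons) can be sketched as follows. This is a minimal illustration only: the feature vectors and labels below are synthetic placeholders, not the paper's actual Emotiv EPOC+ API outcomes or participant labels, and the training setup (solver, iterations) is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: in the paper, inputs would be the headset API's
# emotional outcomes and targets the IAPS/SAM-derived emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # placeholder feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # placeholder class labels

# Three hidden layers with 30, 8 and 3 neurons, matching the best
# configuration reported in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(30, 8, 3),
                    max_iter=2000, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)  # accuracy on the synthetic training data
```

On the real data, accuracy would be evaluated on held-out participants or trials rather than on the training set shown here.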